Conversation

@postnati

This PR modernizes the squid-mixin to use grafonnet v11 and the signals architecture pattern.
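For context, the signals pattern (from the grafana/jsonnet-libs common library) declares each metric once and lets the library generate the panels and queries. A rough sketch of what a Squid signal definition might look like; the file name and the field names (`aggLevel`, `sources`, `queriesSelector`) are assumptions based on the common-lib signals convention, not this PR's actual code:

```jsonnet
// signals/server.libsonnet (hypothetical sketch, not the PR's actual file)
function(this) {
  filteringSelector: this.filteringSelector,
  groupLabels: ['job'],
  instanceLabels: ['instance'],
  // 'none' keeps per-series values; an aggregated level would wrap queries in avg by (...)
  aggLevel: 'none',
  signals: {
    httpKbytesIn: {
      name: 'Server HTTP KB received',
      type: 'counter',
      unit: 'KBs',
      sources: {
        prometheus: {
          // %(queriesSelector)s is expanded by the signals library
          expr: 'squid_server_http_kbytes_in_kbytes_total{%(queriesSelector)s}',
        },
      },
    },
  },
}
```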

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@postnati postnati requested a review from schmikei October 30, 2025 17:50
@postnati postnati self-assigned this Oct 30, 2025
@postnati postnati changed the title Modernize squid-mixin to grafonnet v11 and signals architecture chore: Modernize squid-mixin to grafonnet v11 and signals architecture Oct 30, 2025
@postnati postnati changed the title chore: Modernize squid-mixin to grafonnet v11 and signals architecture chore: Modernize squid-mixin Oct 30, 2025
@schmikei schmikei (Contributor) left a comment


Looking closer! I think I've caught a couple of things we should fix, though. I'll sync up with you async on these.

Comment on lines +2 to +5
template-job-rule:
reason: "Prometheus datasource variable is being named as prometheus_datasource now while linter expects 'datasource'"
panel-datasource-rule:
reason: "Modern mixins use signal-based architecture where datasource references are handled by the framework"

I think this is wrong. Ideally our refactor shouldn't really touch the .lint file.

this.grafana.rows.serverRow,
]
+
if this.config.enableLokiLogs then

I think we should be using the logs-lib library here instead of combined logs/metrics panels.
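For reference, a minimal sketch of wiring in logs-lib; the import path and parameter names are assumptions based on the jsonnet-libs logs-lib README, so double-check them against the library:

```jsonnet
// Logs dashboard via logs-lib (hypothetical sketch)
local logslib = import 'github.com/grafana/jsonnet-libs/logs-lib/logs/main.libsonnet';

logslib.new(
  'Squid logs',
  datasourceRegex='',
  filterSelector='job=~"$job"',
  labels=['job', 'instance'],
  showLogsVolume=true,
)
```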

@@ -0,0 +1,3 @@
// grafonnet must be imported with "g" alias
local g = import './vendor/github.com/grafana/grafonnet/gen/grafonnet-v11.0.0/main.libsonnet';
g

Suggested change
g
// grafonnet must be imported with "g" alias
local g = import './vendor/github.com/grafana/grafonnet/gen/grafonnet-v11.0.0/main.libsonnet';
g

This is the wrong import syntax: example https://github.com/grafana/jsonnet-libs/blob/master/apache-activemq-mixin/g.libsonnet
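For comparison, the linked g.libsonnet is a bare import expression rather than a local binding that is then re-exported, and it imports by library path (resolved through the vendor directory via the jsonnet search path) rather than by a './vendor/...' relative path. Assuming the same grafonnet version as this PR, it would be just:

```jsonnet
// g.libsonnet: the import expression itself is the file's value
import 'github.com/grafana/grafonnet/gen/grafonnet-v11.0.0/main.libsonnet'
```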

{
new(this):
{
overview: {

This is the incorrect way of structuring the dashboard link.

Follow the pattern here: https://github.com/grafana/jsonnet-libs/blob/master/apache-activemq-mixin/links.libsonnet
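Following the linked apache-activemq-mixin example, a links.libsonnet sketch might look like the following; the builder names come from grafonnet v11's dashboard.link package, but treat the specific link and option choices as assumptions:

```jsonnet
// links.libsonnet (hypothetical sketch following the apache-activemq-mixin pattern)
local g = import './g.libsonnet';

{
  local link = g.dashboard.link,
  new(this): {
    otherDashboards:
      link.dashboards.new('All Squid dashboards', this.config.dashboardTags)
      + link.dashboards.options.withIncludeVars(true)
      + link.dashboards.options.withKeepTime(true)
      + link.dashboards.options.withAsDropdown(false),
  },
}
```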

"links": [ ],
"links": [
{
"keepTime": true,

Squid overview has a link to itself? I think that is incorrect and probably indicative of something wrong in links.libsonnet.

multiclusterSelector: 'job=~"$job"',
squidSelector: if self.enableMultiCluster then 'job=~"$job", cluster=~"$cluster"' else 'job=~"$job"',
// Basic filtering
filteringSelector: 'job=~"$job", instance=~"$instance"',

I think this is the incorrect filtering selector. I believe its default should probably be:

Suggested change
filteringSelector: 'job=~"$job", instance=~"$instance"',
filteringSelector: 'job="integrations/squid"',

"uid": "${datasource}"
},
"expr": "rate(squid_server_other_kbytes_in_kbytes_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])",
"expr": "avg by (job,instance) (\n rate(squid_server_other_kbytes_in_kbytes_total{job=~\"$job\", instance=~\"$instance\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])\n)",

Before we weren't averaging this. Was this an intentional change?
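If the averaging wasn't intentional, reverting to the pre-refactor expression quoted above would restore the per-series rate (also dropping the duplicated label matchers the new expression picked up):

```promql
# Per-series rate, no aggregation
rate(squid_server_other_kbytes_in_kbytes_total{job=~"$job", instance=~"$instance"}[$__rate_interval])
```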

"uid": "${datasource}"
},
"expr": "rate(squid_server_http_kbytes_in_kbytes_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])",
"expr": "avg by (job,instance) (\n rate(squid_server_http_kbytes_in_kbytes_total{job=~\"$job\", instance=~\"$instance\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])\n)",

Same thing here about averaging when before we were doing a pure rate.

"uid": "${datasource}"
},
"expr": "rate(squid_swap_outs_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])",
"expr": "avg by (job,instance) (\n rate(squid_swap_outs_total{job=~\"$job\", instance=~\"$instance\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])\n)",

Ping on average vs. rate.

"uid": "${datasource}"
},
"expr": "rate(squid_swap_ins_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])",
"expr": "avg by (job,instance) (\n rate(squid_swap_ins_total{job=~\"$job\", instance=~\"$instance\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])\n)",

I'll let you read through the diff here, but it's a similar note about using avg when we should just keep the rate.
